Self-supervised learning (SSL) has delivered excellent performance on various downstream vision tasks. Two mainstream SSL frameworks have been proposed, i.e., Instance Discrimination (ID) and Masked Image Modeling (MIM). ID pulls together representations of different views from the same image while avoiding feature collapse. It performs well on linear probing but is inferior in detection performance. On the other hand, MIM reconstructs the original content given a masked image. It excels at dense prediction but performs poorly on linear probing. Their distinction arises from neglecting the representation requirements of either semantic alignment or spatial sensitivity. Specifically, we observe that (1) semantic alignment requires semantically similar views to be projected into nearby representations, which can be achieved by contrasting different views with strong augmentations; (2) spatial sensitivity requires modeling the local structures within an image. Predicting dense representations from masked images is therefore beneficial, because it models the conditional distribution of image content. Driven by these analyses, we propose Siamese Image Modeling (SIM), which predicts the dense representations of an augmented view based on another masked view from the same image but with different augmentations. Our method uses a Siamese network with two branches. The online branch encodes the first view and predicts the second view's representations according to the relative positions between the two views. The target branch produces the target by encoding the second view. In this way, we are able to achieve linear-probing and dense-prediction performance comparable to ID and MIM, respectively. We also demonstrate that decent linear-probing results can be obtained without a global loss. Code shall be released at https://github.com/fundamentalvision/siamese-image-modeling.
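The two-branch prediction scheme described above can be sketched minimally as follows. This is an illustrative toy, not the authors' implementation: linear maps stand in for the encoders, masking and the EMA target update are reduced to their essentials, and the predictor and relative-position conditioning are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_OUT, N_PATCH = 8, 4, 6          # toy patch and feature sizes

W_online = rng.normal(size=(D_IN, D_OUT))  # online encoder (trainable)
W_target = W_online.copy()                 # target branch starts as a copy

def encode(W, patches):
    return patches @ W                     # stand-in for a real backbone

# two differently augmented views of the same image
view1 = rng.normal(size=(N_PATCH, D_IN))
view2 = view1 + 0.1 * rng.normal(size=(N_PATCH, D_IN))  # "augmentation"

# online branch: encode a masked view1 and predict view2's dense features
mask = rng.random(N_PATCH) > 0.5
pred = encode(W_online, view1 * mask[:, None])

# target branch: encode view2 (no gradient), then update by EMA
target = encode(W_target, view2)
loss = np.mean((pred - target) ** 2)       # dense, per-patch loss

momentum = 0.99
W_target = momentum * W_target + (1 - momentum) * W_online
```

The dense (per-patch) loss is what distinguishes this setup from a global ID loss: every spatial location of the second view is a prediction target.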
Self-supervised learning has shown its great potential to extract powerful visual representations without human annotations. Various works have been proposed to deal with self-supervised learning from different perspectives: (1) contrastive learning methods (e.g., MoCo, SimCLR) utilize both positive and negative samples to guide the training direction; (2) asymmetric network methods (e.g., BYOL, SimSiam) get rid of negative samples by introducing a predictor network and the stop-gradient operation; (3) feature decorrelation methods (e.g., Barlow Twins, VICReg) instead aim to reduce the redundancy between feature dimensions. These methods appear quite different in their loss functions, which are designed from various motivations. The final accuracy numbers also vary, since different networks and tricks are used in different works. In this work, we demonstrate that these methods can be unified into the same form. Instead of comparing their loss functions, we derive a unified formula through gradient analysis. Furthermore, we conduct fair and detailed experiments to compare their performance. It turns out that there is little gap between these methods, and using a momentum encoder is the key factor for boosting performance. From this unified framework, we propose a simple but effective gradient form for self-supervised learning. It requires neither a memory bank nor a predictor network, yet still achieves state-of-the-art performance and can easily adopt other training strategies. Extensive linear evaluation experiments and many downstream tasks also demonstrate its effectiveness. Code shall be released.
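The gradient analysis mentioned above can be illustrated on the contrastive (InfoNCE-style) loss: its gradient with respect to the query feature is, up to the temperature factor, the negative of the positive sample minus a softmax-weighted sum of all candidates. A small numpy check of this closed form against a numerical gradient (toy features; not the paper's derivation code):

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 0.2
u = rng.normal(size=5)            # query feature
Z = rng.normal(size=(4, 5))       # candidate features; row 0 is the positive

def loss(u):
    logits = Z @ u / tau
    return -logits[0] + np.log(np.exp(logits).sum())

# analytic gradient: -(z_pos - sum_j p_j z_j) / tau
p = np.exp(Z @ u / tau)
p /= p.sum()
grad_analytic = -(Z[0] - p @ Z) / tau

# central-difference numerical gradient for verification
eps = 1e-6
grad_num = np.array([(loss(u + eps * e) - loss(u - eps * e)) / (2 * eps)
                     for e in np.eye(5)])
```

Comparing such gradient expressions, rather than the loss formulas themselves, is what allows the different SSL families to be written in one unified form.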
Loss functions play an important role in training network-based object detectors. The most widely used evaluation metric for object detection is Average Precision (AP), which captures the performance of the localization and classification sub-tasks simultaneously. However, due to the non-differentiable nature of the AP metric, traditional object detectors adopt separate differentiable losses for the two sub-tasks. Such a misalignment issue may lead to performance degradation. To address this, existing works seek to manually design surrogate losses for the AP metric, which requires expertise and may still be sub-optimal. In this paper, we propose Parameterized AP Loss, where parameterized functions are introduced to substitute the non-differentiable components in the AP calculation. Different AP approximations are thus represented by a family of parameterized functions in a unified formula. An automatic parameter search algorithm is then employed to search for the optimal parameters. Extensive experiments on the COCO benchmark with three different object detectors (i.e., RetinaNet, Faster R-CNN, and Deformable DETR) demonstrate that the proposed Parameterized AP Loss consistently outperforms existing handcrafted losses. Code is released at https://github.com/fundamentalvision/parameterized-ap-loss.
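The substitution described above can be sketched as follows: in the pairwise-ranking form of AP, the non-differentiable Heaviside step is replaced by a parameterized sigmoid, so the exact metric is recovered as the temperature parameter shrinks. This is a simplified illustration with one temperature parameter; the paper's parameterized function family and the automatic parameter search are not reproduced here.

```python
import numpy as np

def smooth_ap(scores, labels, tau):
    """AP with the Heaviside step H(x) replaced by sigmoid(x / tau)."""
    H = lambda x: 1.0 / (1.0 + np.exp(-x / tau))
    pos = np.where(labels == 1)[0]
    ap = 0.0
    for i in pos:
        diff = scores - scores[i]
        rank = 1.0 + H(diff).sum() - H(0.0)            # exclude self-comparison
        pos_rank = 1.0 + H(diff[pos]).sum() - H(0.0)
        ap += pos_rank / rank
    return ap / len(pos)

scores = np.array([0.9, 0.8, 0.7, 0.1])
labels = np.array([1, 0, 1, 0])
exact_ap = (1.0 + 2.0 / 3.0) / 2.0                     # precisions 1 and 2/3
```

With a small temperature the sigmoid saturates to a step and the surrogate approaches the exact AP, while remaining differentiable everywhere.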
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Recognizing MEs automatically (MER) is thus becoming increasingly crucial in the field of affective computing, and provides essential technical support for lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the available data remains scarce. To address ME data hunger, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then apply four classical spatiotemporal feature learning models to DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class-imbalance and key-frame sequence-sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Interviews are widely regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews between each other. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like a real interviewer. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct interviews online, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works use separate approaches to handle thing, stuff, and part predictions, without shared computation or task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples the part features from the things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term this model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ with a new part-whole cross-attention scheme to further boost part segmentation quality; this part-whole interaction is implemented with masked cross attention. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
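The masked cross-attention idea mentioned above can be sketched in a few lines: attention logits outside a predicted foreground mask are suppressed, so each part query attends only to the pixels of its predicted whole. This is a bare numpy toy (shapes and masks are illustrative); the actual module in the paper is a transformer layer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_query, n_pixel, d = 2, 6, 4
Q = rng.normal(size=(n_query, d))      # part queries
K = rng.normal(size=(n_pixel, d))      # pixel keys
V = rng.normal(size=(n_pixel, d))      # pixel values

# binary mask per query: which pixels belong to the query's predicted whole
mask = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=bool)

logits = Q @ K.T / np.sqrt(d)
logits = np.where(mask, logits, -np.inf)          # masked cross attention
attn = np.exp(logits - logits.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
out = attn @ V                                     # per-query attended features
```

Setting masked logits to negative infinity zeroes their attention weights after the softmax, which is the standard way to restrict each query's receptive field.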
In contrast to control-theoretic methods, the lack of a stability guarantee remains a significant problem for model-free reinforcement learning (RL) methods. Jointly learning a policy and a Lyapunov function has recently become a promising approach to equipping the whole system with a stability guarantee. However, the classical Lyapunov constraints introduced in prior work cannot stabilize the system during sampling-based optimization. Therefore, we propose Adaptive Stability Certification (ASC), which makes the system reach sampling-based stability. Because the ASC condition can search for the optimal policy heuristically, we design the Adaptive Lyapunov-based Actor-Critic (ALAC) algorithm based on it. Meanwhile, our algorithm avoids the optimization difficulty of current approaches, in which a variety of constraints are coupled into the objective. When evaluated on ten robotic tasks, our method achieves lower accumulated cost and fewer stability constraint violations than previous studies.
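The sampling-based stability notion above can be illustrated with a toy check: for a stable linear system and a quadratic Lyapunov candidate, the decrease condition L(s') - L(s) <= -alpha * L(s) is verified on sampled transitions. The system, the candidate, and the constant alpha here are illustrative stand-ins, not the paper's ASC condition.

```python
import numpy as np

rng = np.random.default_rng(0)
A = 0.9                       # stable scalar dynamics: s' = A * s
alpha = 0.1                   # required decrease rate
L = lambda s: s ** 2          # quadratic Lyapunov candidate

# sample transitions and check the decrease condition empirically
states = rng.normal(size=1000)
next_states = A * states

# analytically, L(s') - L(s) = (A**2 - 1) * s**2 = -0.19 * s**2 <= -alpha * s**2
decrease = L(next_states) - L(states)
violations = int(np.sum(decrease > -alpha * L(states) + 1e-12))
```

In an actor-critic setting the same check becomes a constraint evaluated on minibatch samples, with the Lyapunov function itself being learned rather than fixed.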
Surgical robot automation has attracted increasing research interest over the past decade, given its huge potential to benefit surgeons, nurses and patients. Recently, the learning paradigm of embodied AI has demonstrated promising ability to learn good control policies for various complex tasks, where embodied AI simulators play an essential role in facilitating relevant research. However, existing open-sourced simulators for surgical robots still do not sufficiently support human interaction through physical input devices, which further limits effective investigation of how human demonstrations would affect policy learning. In this paper, we study human-in-the-loop embodied intelligence with a new interactive simulation platform for surgical robot learning. Specifically, we build our platform on our previously released SurRoL simulator, with several new features co-developed to allow high-quality human interaction via an input device. With these, we further propose to collect human demonstrations and imitate the action patterns to achieve more effective policy learning. We showcase the improvement of our simulation environment with the newly designed features and tasks, and validate state-of-the-art reinforcement learning algorithms using the interactive environment. Promising results are obtained, with which we hope to pave the way for future research on surgical embodied intelligence. Our platform is released and will be continuously updated at: https://med-air.github.io/SurRoL/
In this paper, a semantic communication framework for image transmission is developed. In the investigated framework, a set of servers cooperatively transmits images to a set of users using semantic communication techniques. To evaluate the performance of the studied semantic communication system, a multimodal metric is proposed to measure the correlation between the extracted semantic information and the original image. To meet the ISS requirement of each user, each server must jointly determine the semantic information to be transmitted and the resource blocks (RBs) used for semantic information transmission. We formulate this problem as an optimization problem that aims to minimize each server's transmission latency while satisfying the ISS requirement. To solve this problem, a value-decomposition-based entropy-maximized multi-agent reinforcement learning (RL) algorithm is proposed, which enables servers to coordinate during training and execute RB allocation in a distributed manner so as to approach a globally optimal performance with fewer training iterations. Compared to traditional multi-agent RL, the proposed RL algorithm improves the valuable action exploration of servers and the probability of finding a globally optimal RB allocation policy based on local observations. Simulation results show that the proposed algorithm can reduce the transmission delay by up to 16.1% compared to traditional multi-agent RL.
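Two ingredients named above can be sketched in isolation: per-server Q-values summed into a team value (value decomposition), and an entropy-regularized softmax policy over RBs whose temperature controls how broadly a server explores. All numbers and functions here are illustrative; this is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_rbs = 3, 4
q_local = rng.normal(size=(n_agents, n_rbs))  # each server's Q over resource blocks

# value decomposition: the team value is the sum of local values,
# so each server can act greedily on its own Q while optimizing the team
q_tot = q_local.sum(axis=0)

def softmax_policy(q, temperature):
    z = np.exp((q - q.max()) / temperature)
    return z / z.sum()

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum()

# higher temperature keeps exploration of RBs broad (entropy maximization)
exploratory = softmax_policy(q_local[0], temperature=1.0)
greedy_like = softmax_policy(q_local[0], temperature=0.05)
```

The entropy term is what keeps servers from collapsing onto one RB choice too early, which is the exploration benefit the abstract attributes to the proposed method.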
Inferring missing links or detecting spurious ones based on observed graphs, known as link prediction, is a long-standing challenge in graph data analysis. With the recent advances in deep learning, graph neural networks have been used for link prediction and have achieved state-of-the-art performance. Nevertheless, existing methods developed for this purpose are typically discriminative, computing features of local subgraphs around two neighboring nodes and predicting potential links between them from the perspective of subgraph classification. In this formalism, the selection of enclosing subgraphs and heuristic structural features for subgraph classification significantly affects the performance of the methods. To overcome this limitation, this paper proposes a novel and radically different link prediction algorithm based on the network reconstruction theory, called GraphLP. Instead of sampling positive and negative links and heuristically computing the features of their enclosing subgraphs, GraphLP utilizes the feature learning ability of deep-learning models to automatically extract the structural patterns of graphs for link prediction under the assumption that real-world graphs are not locally isolated. Moreover, GraphLP explores high-order connectivity patterns to utilize the hierarchical organizational structures of graphs for link prediction. Our experimental results on all common benchmark datasets from different applications demonstrate that the proposed method consistently outperforms other state-of-the-art methods. Unlike the discriminative neural network models used for link prediction, GraphLP is generative, which provides a new paradigm for neural-network-based link prediction.
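The high-order-connectivity idea above can be illustrated with a classical network-reconstruction score: summing powers of the adjacency matrix weights paths of increasing length, so node pairs connected through many short paths score high. This is a Katz-style sketch of the principle, not GraphLP's learned model.

```python
import numpy as np

# toy undirected graph: edges (0,1), (1,2), (0,3)
n = 4
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (0, 3)]:
    A[i, j] = A[j, i] = 1.0

beta = 0.1                      # decay: longer paths contribute less
S = np.zeros_like(A)
Ak = np.eye(n)
for k in range(1, 4):           # accumulate paths of length 1..3
    Ak = Ak @ A
    S += beta ** k * Ak

# pair (0,2) shares a common neighbor (length-2 path via node 1);
# pair (2,3) is linked only through the length-3 path 2-1-0-3
```

Ranking unobserved pairs by such reconstruction scores is the generative alternative to classifying enclosing subgraphs, which is the contrast the abstract draws.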